
    The Multiobjective Average Network Flow Problem: Formulations, Algorithms, Heuristics, and Complexity

    Integrating value-focused thinking with the shortest path problem results in a unique formulation called the multiobjective average shortest path problem. We prove this problem is NP-complete for general graphs. For directed acyclic graphs, an efficient algorithm and an even faster heuristic are proposed. While the worst-case error of the heuristic is proven unbounded, its average performance on random graphs is within 3% of the optimal solution. Additionally, a special case of the more general biobjective average shortest path problem is given, allowing tradeoffs between decreases in arc set cardinality and increases in multiobjective value; the algorithm that solves the average shortest path problem provides all the information needed to solve this more difficult biobjective problem. These concepts are then extended to the minimum cost flow problem, creating a new formulation we name the multiobjective average minimum cost flow problem. This problem is proven NP-complete as well. For directed acyclic graphs, two efficient heuristics are developed, and although we prove the error of any successive average shortest path heuristic is in theory unbounded, both perform very well on random graphs. Furthermore, we define a general biobjective average minimum cost flow problem. The information from the heuristics can be used to estimate the efficient frontier in a special case of this problem, trading off total flow and multiobjective value. Finally, several variants of these two problems are discussed, and conjectures are offered on the conditions under which they are solvable in polynomial time and when they remain NP-complete.
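
    The abstract does not spell out the formulation, but a minimal single-objective sketch helps fix ideas: assuming the "average" objective means minimizing the mean arc cost of a source-sink path in a DAG, the problem can be solved by dynamic programming over (node, arc-count) pairs. The function name, toy graph, and costs below are illustrative assumptions, not the thesis's algorithm.

```python
# Minimal sketch (single objective): minimize the MEAN arc cost of a source-sink
# path in a DAG via dynamic programming over (node, arc count). The formulation,
# function name, and toy graph are assumptions for illustration only.
from collections import defaultdict
import math

def min_average_path(arcs, source, sink, n_nodes):
    """arcs: list of (u, v, cost) in a DAG whose nodes 0..n_nodes-1 are topologically ordered."""
    # best[v][k] = minimum total cost of a source-to-v path using exactly k arcs
    best = [defaultdict(lambda: math.inf) for _ in range(n_nodes)]
    best[source][0] = 0.0
    for u in range(n_nodes):                      # process nodes in topological order
        for (a, b, c) in arcs:
            if a != u:
                continue
            for k, cost in list(best[u].items()):
                if cost + c < best[b][k + 1]:
                    best[b][k + 1] = cost + c
    # The mean-cost optimum is the best ratio (total cost / arc count) at the sink.
    candidates = [(cost / k, k) for k, cost in best[sink].items()
                  if k > 0 and math.isfinite(cost)]
    return min(candidates) if candidates else None

# Toy usage: the three-arc path 0-1-2-3 (mean cost 1.0) beats the direct arc 0-3 (cost 5.0).
arcs = [(0, 3, 5.0), (0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
print(min_average_path(arcs, source=0, sink=3, n_nodes=4))   # -> (1.0, 3)
```

    Tracking the arc count separately is also what exposes the tradeoff between arc set cardinality and path value mentioned for the biobjective case.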

    Updating Optimal Decisions Using Game Theory and Exploring Risk Behavior through Response Surface Methodology

    This thesis utilizes game theory within a framework for updating optimal decisions as new information becomes available. Methodology is developed that allows a decision maker to change his perceived optimal policy based on available knowledge of the opponent's strategy, where the opponent is either a rational decision maker or a random component (nature). Utility theory is applied to account for the different risk preferences of the decision makers. Furthermore, response surface methodology is used to explore effective risk strategies with which the decision maker can approach each situation. The techniques are applied to a combat scenario, a football game, and a terrorist resource allocation problem, providing a decision maker with the best possible strategy given the information available to him. The results are intuitive and exemplify the benefits of using the methods.
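
    As a hypothetical illustration of the core idea (re-selecting the optimal action once a belief about the opponent's strategy is available, with risk preferences captured by a utility function), the sketch below uses a made-up payoff matrix, belief vectors, and an exponential utility; it is not one of the scenarios analyzed in the thesis.

```python
# Hypothetical sketch: re-select the optimal action once information about the
# opponent's (mixed) strategy is available, under a risk-averse utility.
# Payoff matrix, beliefs, and the exponential utility are illustrative assumptions.
import numpy as np

payoff = np.array([[3.0, -1.0],     # decision maker's payoff for each
                   [0.0,  2.0]])    # (own action, opponent action) pair

def utility(x, risk_aversion=0.5):
    # Exponential (constant absolute risk aversion) utility.
    return 1.0 - np.exp(-risk_aversion * x)

def best_response(belief):
    """belief: probability vector over the opponent's actions."""
    expected_utility = utility(payoff) @ belief
    return int(np.argmax(expected_utility)), expected_utility

# With no information, assume a uniform belief over opponent actions.
print(best_response(np.array([0.5, 0.5])))   # picks action 1
# New information: the opponent strongly favors their first action.
print(best_response(np.array([0.9, 0.1])))   # the optimal decision switches to action 0
```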

    Evaluating the Impacts of Technology Education on Military Maintenance Students

    The United States Air Force (USAF) provides career and technical education (CTE) to a wide variety of specialty career fields. Training airmen to carry out the mission while honoring the USAF core values of integrity first, service before self, and excellence in all we do is the top priority of military leaders and trainers. Vehicle maintenance is especially important, as one minor malfunction could cause multiple injuries and deaths. Vehicle maintainers are thus trained in grueling learning environments and follow arduous regulations to ensure the utmost adherence to standards. This paper presents the findings of a recent study at the Port Hueneme Naval Station in California, home of the Air Force vehicle maintenance technical school. The results focus on three specific areas that contribute to performance: student learning preferences, Armed Services Vocational Aptitude Battery (ASVAB) scores and other personal characteristics, and a comparison of alternate training aids.

    Electromagnetic Chirps from Neutron Star-Black Hole Mergers

    We calculate the electromagnetic signal of a gamma-ray flare coming from the surface of a neutron star shortly before merger with a black hole companion. Using a new version of the Monte Carlo radiation transport code Pandurata that incorporates dynamic spacetimes, we integrate photon geodesics from the neutron star surface until they reach a distant observer or are captured by the black hole. The gamma-ray light curve is modulated by a number of relativistic effects, including Doppler beaming and gravitational lensing. Because the photons originate from the inspiraling neutron star, the light curve closely resembles the corresponding gravitational waveform: a chirp signal characterized by a steadily increasing frequency and amplitude. We propose to search for these electromagnetic chirps using matched filtering algorithms similar to those used in LIGO data analysis. Comment: 13 pages, 5 figures, submitted to Ap
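
    A minimal matched-filter sketch, in the spirit of the proposed search (not the Pandurata code or a LIGO production pipeline): under a white-noise assumption, correlating the data with a chirp template and normalizing by the template norm gives an SNR-like detection statistic at each lag. The sample rate, template, and injection below are illustrative assumptions.

```python
# Minimal matched-filter sketch under a white-noise assumption; the template,
# noise model, and chirp parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
fs = 1024.0                                   # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)               # 1 s template

# Toy "chirp" template: frequency (5 -> 60 Hz) and amplitude both sweep upward.
f0, f1 = 5.0, 60.0
phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / 2.0)
template = t * np.sin(phase)

# Synthetic data stream: 4 s of unit-variance white noise with the chirp
# injected 1 s in, at three times the template amplitude.
data = rng.normal(0.0, 1.0, 4 * int(fs))
offset = int(fs)
data[offset:offset + template.size] += 3.0 * template

# Matched filter: slide the template over the data and normalize by the
# template norm, giving an SNR-like detection statistic at each lag.
stat = np.correlate(data, template, mode="valid") / np.linalg.norm(template)
print("peak statistic %.1f at t = %.2f s" % (stat.max(), stat.argmax() / fs))
```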

    Sex differences in exercise-induced diaphragmatic fatigue in endurance-trained athletes

    There is evidence that female athletes may be more susceptible to exercise-induced arterial hypoxemia and expiratory flow limitation and may have greater increases in operational lung volumes during exercise relative to men. These pulmonary limitations may ultimately lead to greater levels of diaphragmatic fatigue in women. Accordingly, the purpose of this study was to determine whether there are sex differences in the prevalence and severity of exercise-induced diaphragmatic fatigue in 38 healthy endurance-trained athletes: men (n = 19; maximal aerobic capacity = 64.0 ± 1.9 ml·kg⁻¹·min⁻¹) and women (n = 19; maximal aerobic capacity = 57.1 ± 1.5 ml·kg⁻¹·min⁻¹). Transdiaphragmatic pressure (Pdi) was calculated as the difference between gastric and esophageal pressures. Inspiratory pressure-time products of the diaphragm and esophagus were calculated as the product of breathing frequency and the Pdi and esophageal pressure time integrals, respectively. Cervical magnetic stimulation was used to measure potentiated Pdi twitches (Pdi,tw) before and 10, 30, and 60 min after a constant-load cycling test performed at 90% of peak work rate until exhaustion. Diaphragm fatigue was considered present if there was a 15% reduction in Pdi,tw after exercise. Diaphragm fatigue occurred in 11 of 19 men (58%) and 8 of 19 women (42%). The percent drop in Pdi,tw at 10, 30, and 60 min after exercise in men (n = 11) was 30.6 ± 2.3, 20.7 ± 3.2, and 13.3 ± 4.5%, respectively, whereas in women (n = 8) it was 21.0 ± 2.1, 11.6 ± 2.9, and 9.7 ± 4.2%, respectively, with sex differences occurring at 10 and 30 min (P < 0.05). Men showed a progressively reduced contribution of the diaphragm to total inspiratory force output (the ratio of the diaphragmatic to the esophageal pressure-time product) during exercise, whereas the diaphragmatic contribution in women changed very little over time. The findings from this study suggest that the female diaphragm is more resistant to fatigue than that of male counterparts.
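
    Written out, the pressure definitions used above are as follows (with P_ga and P_es the gastric and esophageal pressures, f_b the breathing frequency, and the integrals taken over inspiratory time T_I; this notation is assumed, not quoted from the paper):

```latex
P_{\mathrm{di}} = P_{\mathrm{ga}} - P_{\mathrm{es}},
\qquad
\mathrm{PTP}_{\mathrm{di}} = f_{\mathrm{b}} \int_{T_{\mathrm{I}}} P_{\mathrm{di}}\, dt,
\qquad
\mathrm{PTP}_{\mathrm{es}} = f_{\mathrm{b}} \int_{T_{\mathrm{I}}} P_{\mathrm{es}}\, dt
```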

    SPARC: Statistical Performance Analysis With Relevance Conclusions

    The performance of one computer relative to another is traditionally characterized through benchmarking, a practice occasionally deficient in statistical rigor. Performance is often trivialized through simplified measures, such as measures of central tendency, but doing so risks losing perspective on the variability and non-determinism of modern computer systems. Authentic performance evaluations are derived from statistical methods that accurately interpret and assess data. Methods that currently exist within performance comparison frameworks are limited in efficacy: statistical inference is either overly simplified or avoided altogether. A prevalent criticism in the computer performance literature is that the results of difference hypothesis testing lack substance. To address this problem, we propose a new framework, SPARC, that pioneers a synthesis of difference and equivalence hypothesis testing to provide relevant conclusions. It is a union of three key components: (i) identifying either superiority or similarity through difference and equivalence hypotheses, (ii) scalable methodology (based on the number of benchmarks), and (iii) a conditional feedback loop from test outcomes that produces informative conclusions of relevance, equivalence, triviality, or indeterminacy. We present an experimental analysis characterizing the performance of three open-source RISC-V processors to evaluate SPARC and its efficacy compared to similar frameworks.
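
    A minimal sketch of the central pairing SPARC builds on (a difference test combined with an equivalence test): Welch's t-test for difference and two one-sided tests (TOST) for equivalence against a chosen margin. The margin, data, and decision labels below are illustrative assumptions and omit SPARC's scalability and feedback-loop machinery.

```python
# Minimal sketch of combining a difference test with an equivalence (TOST) test.
# The equivalence margin, sample data, and decision labels are illustrative
# assumptions; SPARC's actual conditional feedback loop is more involved.
import numpy as np
from scipy import stats   # requires scipy >= 1.6 for the `alternative` keyword

def difference_and_equivalence(a, b, margin, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Difference test: two-sided Welch's t-test (H0: means are equal).
    p_diff = stats.ttest_ind(a, b, equal_var=False).pvalue
    # Equivalence test (TOST): reject both one-sided nulls that the true
    # difference in means lies outside (-margin, +margin).
    p_lower = stats.ttest_ind(a, b - margin, equal_var=False, alternative="greater").pvalue
    p_upper = stats.ttest_ind(a, b + margin, equal_var=False, alternative="less").pvalue
    p_equiv = max(p_lower, p_upper)
    if p_diff < alpha and p_equiv >= alpha:
        return "different", p_diff, p_equiv
    if p_equiv < alpha and p_diff >= alpha:
        return "equivalent", p_diff, p_equiv
    if p_diff < alpha and p_equiv < alpha:
        return "trivially different", p_diff, p_equiv   # significant, yet within the margin
    return "indeterminate", p_diff, p_equiv

# Toy benchmark runtimes (seconds) for two processors.
rng = np.random.default_rng(1)
proc_a = rng.normal(10.0, 0.5, 30)
proc_b = rng.normal(10.1, 0.5, 30)
print(difference_and_equivalence(proc_a, proc_b, margin=0.5))
```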

    Use of Redox Probes for Characterization of Layer-by-Layer Gold Nanoparticle-Modified Screen-Printed Carbon Electrodes

    The electrochemical characteristics of bare and surface-modified screen-printed carbon electrodes (SPCEs) were compared using voltammetric responses of common redox probes to determine the potential role of nanomaterials in previously documented signal enhancement. SPCEs modified with gold nanoparticles (AuNPs) by layer-by-layer (LbL) electrostatic adsorption were previously reported to exhibit an increase in voltammetric signal for Fe(CN)₆³⁻/⁴⁻ corresponding to a 102% improvement in electroactive surface area over bare SPCEs. In the present work, AuNP-modified SPCEs prepared by the same LbL strategy using the polycation poly(diallyldimethylammonium chloride) (PDDA) were found to provide no beneficial increase in electroactive surface area over bare SPCEs. Although a similar improvement in the voltammetric signal of Fe(CN)₆³⁻/⁴⁻ was found for AuNP/PDDA-modified SPCEs compared to bare SPCEs in these studies, results with other redox couples, ferrocene methanol (FcMeOH/FcMeOH⁺) and Ru(NH₃)₆³⁺/²⁺, indicated no difference between the electroactive surface areas of modified and bare SPCEs. Furthermore, gold present on AuNP/PDDA-modified SPCEs accounted for only 62 (±12)% of the electroactive surface area. The previously reported improvement in electroactive surface area attributed to the inclusion of AuNPs on the SPCE surface appears to have resulted from a misinterpretation of the non-ideal behavior of Fe(CN)₆³⁻ as a redox probe for bare SPCEs.
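
    For context, one standard route from the peak current of a diffusion-limited redox probe to an electroactive surface area is the Randles–Ševčík relation for a reversible couple at 25 °C; whether this exact relation was used in the study is not stated in the abstract.

```latex
i_{p} = 2.69 \times 10^{5}\, n^{3/2}\, A\, D^{1/2}\, C\, \nu^{1/2}
```

    Here i_p is the peak current (A), n the number of electrons transferred, A the electroactive area (cm²), D the probe diffusion coefficient (cm² s⁻¹), C its bulk concentration (mol cm⁻³), and ν the scan rate (V s⁻¹).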

    An exploratory survey on the awareness and usage of clinical practice guidelines among clinical pharmacists

    Background: The NHLBI has not developed clinical practice guidelines since 2007. As a result, multiple organizations have released competing guidelines. This has created confusion and debate among clinicians as to which recommendations are most applicable to practice. Objectives: To explore preliminary attitudes toward, awareness of, and usage of clinical practice guidelines in practice and teaching for hypertension, dyslipidemia, and asthma among clinical pharmacists. Methods: Clinical pharmacists across the US were surveyed electronically over a two-week period in Spring 2019 regarding utilization and knowledge of practice guidelines for hypertension, dyslipidemia, and asthma. Clinical cases were included to evaluate application of the guidelines. Descriptive statistics, chi-square analysis, and the Wilcoxon signed-rank test were conducted. The statistical significance level was set at 0.01 to account for multiple tests conducted on the same survey participants. Results: Forty-eight, 34, and 28 pharmacists voluntarily completed the hypertension, dyslipidemia, and asthma survey questions, respectively. Interactions by disease state (p < 0.001) revealed that more pharmacists (93%) reported having ≤50% of their patient load in asthma management and that more pharmacists (95%) had read the full summary/report of the most recent hypertension guideline. Primary reasons why the most recent guideline was not selected also differed significantly by disease state (interaction; p < 0.001). For dyslipidemia and asthma, pharmacists had a higher mean rating of agreement (p < 0.007) that they had the most confidence in the most recent guidelines as compared with older ones. Proportionally more clinical cases were answered correctly (interaction; p < 0.001) when pharmacists applied the most recent guideline for hypertension (84%), while the opposite outcome was found for asthma (27%). Conclusion: While more pharmacists selected the most recent guideline for practice and teaching, application of the guidelines to clinical cases was inconsistent. Further studies with a larger representation of pharmacists are warranted to more definitively determine factors influencing guideline preference and usage.

    Poor fit to the multispecies coalescent is widely detectable in empirical data

    Model checking is a critical part of Bayesian data analysis, yet it remains largely unused in systematic studies. Although phylogeny estimation has recently moved into an era of increasingly complex models that simultaneously account for multiple evolutionary processes, the statistical fit of these models to the data has rarely been tested. Here we develop a posterior predictive simulation-based model check for a commonly used multispecies coalescent model, implemented in *BEAST, and apply it to 25 published data sets. We show that poor model fit is detectable in the majority of data sets; that this poor fit can mislead phylogenetic estimation; and that in some cases it stems from processes of inherent interest to systematists. We suggest that as systematists scale up to phylogenomic data sets, which will be subject to a heterogeneous array of evolutionary processes, critically evaluating the fit of models to data is an analytical step that can no longer be ignored. [Gene duplication and extinction; gene tree; hybridization; model fit; multispecies coalescent; next-generation sequencing; posterior predictive simulation; species delimitation; species tree.] © The Author(s) 2013.
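
    The logic of a posterior predictive simulation check can be sketched generically: draw parameters from the posterior, simulate replicate data sets under the model, and compare a test statistic on the replicates to its observed value; an extreme posterior predictive p-value flags poor fit. The toy normal model below stands in for the multispecies coalescent and is not the *BEAST implementation.

```python
# Generic posterior predictive simulation check, illustrated with a simple
# normal model (known variance) rather than the multispecies coalescent; the
# data, prior, and test statistic are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.exponential(scale=1.0, size=60)   # skewed data the model fits poorly

def skewness(x):
    x = np.asarray(x, float)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Model: x_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10^2); conjugate posterior for mu.
n = observed.size
post_prec = 1.0 / 100.0 + n / 1.0
post_mean = (n * observed.mean() / 1.0) / post_prec
post_sd = post_prec ** -0.5

# Posterior predictive simulation: draw mu, simulate a replicate data set,
# and record the test statistic; repeat many times.
reps = []
for _ in range(2000):
    mu = rng.normal(post_mean, post_sd)
    replicate = rng.normal(mu, 1.0, size=n)
    reps.append(skewness(replicate))

# Posterior predictive p-value: fraction of replicates at least as extreme as
# the observed statistic; values near 0 or 1 indicate poor model fit.
p = np.mean(np.asarray(reps) >= skewness(observed))
print("observed skewness %.2f, posterior predictive p = %.3f" % (skewness(observed), p))
```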